Experts: Human Involvement Can Help Prevent AI Mistakes
2022-10-21
Security experts say artificial intelligence (AI) systems used by businesses can make serious, costly mistakes.
But one way to avoid such mistakes is for companies to employ humans to closely watch the AI.
One example of AI problems that can affect businesses happened early in the COVID-19 pandemic.
It involved the credit scoring company Fair Isaac Corporation, which is known as FICO.
FICO's credit scores are used by about two-thirds of the world's largest banks to help make lending decisions.
The company's systems are also used to identify possible cases of credit fraud.
FICO officials recently told Reuters news agency that one of the company's AI systems misidentified a large number of credit card fraud cases.
At the time, the pandemic had caused a large increase in online shopping.
The AI tool considered the rise in online shopping to be the result of fraudulent activity.
As a result, the AI system told banks to deny millions of purchase attempts from online buyers.
The incident happened just as people were hurrying to buy products that were in short supply in stores.
But FICO told Reuters that in the end, very few buyers had their purchase requests denied.
This is because a group of experts the company employs to observe, or monitor, its AI systems recognized the false fraud identifications.
The workers made temporary adjustments to avoid an AI-ordered block on spending.
FICO says the expert team is quickly informed about any unusual buying activity that the AI systems might misidentify.
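The safeguard FICO describes can be pictured as a simple rule: if a fraud model suddenly flags far more purchases than usual, send those decisions to human reviewers instead of blocking them automatically. The short Python sketch below is only an illustration of that idea; the model, the thresholds and the function names are assumptions, not details of FICO's actual system.

```python
# Illustrative human-in-the-loop check for a fraud model.
# All names, scores and thresholds here are invented for clarity.

from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount: float
    is_online: bool

def score_transaction(tx: Transaction) -> float:
    """Stand-in for a real fraud model; returns a risk score from 0 to 1."""
    return 0.9 if tx.is_online and tx.amount > 500 else 0.1

def route_batch(batch, block_threshold=0.8, alert_flag_rate=0.10):
    """Block high-risk purchases automatically only while the overall
    flag rate looks normal; otherwise ask human reviewers to decide."""
    flagged = [tx for tx in batch if score_transaction(tx) >= block_threshold]
    flag_rate = len(flagged) / max(len(batch), 1)

    if flag_rate > alert_flag_rate:
        # Unusually many flags (for example, a sudden shift to online
        # shopping): do not auto-block; alert the monitoring team instead.
        return {"auto_blocked": [], "human_review": flagged, "alert": True}
    return {"auto_blocked": flagged, "human_review": [], "alert": False}
```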
But these kinds of corporate teams are not that common, Reuters reports.
Last year, FICO and the business advisory company McKinsey & Company carried out separate studies on the subject.
They found that most organizations involved in the studies were not closely watching their AI-based programs.
Experts say AI systems mainly make mistakes when real-world situations differ from the situations used in creating the intelligence.
In FICO's case, it said its software expected more in-person than online shopping.
This led the system to identify a greater share of financial activity as problematic.
Seasonal differences, data-quality changes or extremely unusual events - such as the pandemic - can lead to a series of bad AI predictions.
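One simple way to notice this kind of shift is to regularly compare the data the model sees now with the data it saw during training. The short Python sketch below illustrates that idea for the online-shopping example; the 20 percent baseline and the tolerance are made-up numbers, not figures from the story.

```python
# Illustrative drift check: compare today's share of online purchases
# with the share the model saw during training. All numbers are invented.

def online_share(purchases):
    """Fraction of purchases that were made online."""
    if not purchases:
        return 0.0
    return sum(1 for p in purchases if p["channel"] == "online") / len(purchases)

def check_for_drift(todays_purchases, training_share=0.20, tolerance=0.15):
    """Warn when the real-world mix of online shopping has moved far
    from what the model expects, so humans can step in."""
    current = online_share(todays_purchases)
    drifted = abs(current - training_share) > tolerance
    if drifted:
        print(f"Drift alert: online share is {current:.0%}, "
              f"but the model was trained expecting about {training_share:.0%}.")
    return drifted
```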
Aleksander Madry is the director of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology.
He told Reuters the pandemic must have been a "wake-up call" for businesses not closely monitoring their AI systems.
This is because AI mistakes can cause huge problems for businesses that do not effectively manage the systems.
"That's what really stops us currently from this dream of AI revolutionizing everything," Madry said.
The issue has taken on new urgency because the European Union plans to pass a new AI law as soon as next year.
The proposed law would require companies to do some observation of their AI systems.
Earlier this month, the U.S. administration also proposed new guidelines aimed at protecting citizens from the harmful effects of AI.
In the guidelines, U.S. officials called for observers to ensure AI system "performance does not fall below an acceptable level over time."
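In practice, that kind of requirement is usually read as: measure the system's performance on a regular schedule and raise an alarm if it drops below an agreed floor. The Python sketch below shows one way such a check could look; the 95 percent floor and the function names are assumptions, not anything taken from the proposed guidelines.

```python
# Illustrative performance monitor: warn when measured accuracy falls
# below an acceptable floor. The floor and the names are assumptions.

def accuracy(predictions, labels):
    """Share of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / max(len(labels), 1)

def performance_ok(predictions, labels, floor=0.95):
    """Return False and warn the monitoring team if accuracy is too low."""
    score = accuracy(predictions, labels)
    if score < floor:
        print(f"Warning: accuracy {score:.1%} is below the {floor:.0%} floor; "
              "human review of the AI system is needed.")
        return False
    return True
```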
I'm Bryan Lynn.
Reuters reported this story. Bryan Lynn adapted the report for VOA Learning English.
____________________________________________________________________
Words in This Story
artificial intelligence - n. the development of computer systems with the ability to perform work that normally requires human intelligence
lend - v. to give something to someone for a period of time with the expectation that they will then give it back later
fraud - n. the crime of using dishonest methods to take something valuable from another person; trickery; deceit
adjustment - n. a slight change made to something so that it works better
confuse - v. to make someone unable to think clearly or understand something
manage - v. to do something or deal with something successfully